Learning from Aggregate Observations

Neural Information Processing Systems

We study the problem of learning from aggregate observations where supervision signals are given to sets of instances instead of individual instances, while the goal is still to predict labels of unseen individuals. A well-known example is multiple instance learning (MIL). In this paper, we extend MIL beyond binary classification to other problems such as multiclass classification and regression. We present a general probabilistic framework that accommodates a variety of aggregate observations, e.g., pairwise similarity/triplet comparison for classification and mean/difference/rank observation for regression. Simple maximum likelihood solutions can be applied to various differentiable models such as deep neural networks and gradient boosting machines. Moreover, we develop the concept of consistency up to an equivalence relation to characterize our estimator and show that it has nice convergence properties under mild assumptions. Experiments on three problem settings (classification via triplet comparison and regression via mean/rank observation) indicate the effectiveness of the proposed method.
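As an illustrative sketch of the maximum-likelihood idea for one aggregate observation type, pairwise similarity for binary classification: if each instance is positive with probability p_i under the model, two instances share a label with probability p1*p2 + (1-p1)*(1-p2), and the model can be fit by maximizing the likelihood of the observed similar/dissimilar pairs. The linear-sigmoid model, function names, and training loop below are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def similarity_nll(w, X1, X2, similar):
    """Negative log-likelihood of pairwise similarity observations.
    With per-instance model p_i = sigmoid(w . x_i), a pair shares its
    label with probability p1*p2 + (1-p1)*(1-p2)."""
    p1, p2 = sigmoid(X1 @ w), sigmoid(X2 @ w)
    p_sim = p1 * p2 + (1 - p1) * (1 - p2)
    p = np.where(similar, p_sim, 1 - p_sim)
    return -np.mean(np.log(np.clip(p, 1e-12, None)))

def fit_from_pairs(X1, X2, similar, lr=0.2, steps=1000, seed=1):
    """Gradient descent on the aggregate likelihood. A numerical
    gradient is used for brevity; fine for a low-dimensional model."""
    rng = np.random.default_rng(seed)
    w = 0.01 * rng.normal(size=X1.shape[1])  # small init: w = 0 is a saddle
    for _ in range(steps):
        g = np.zeros_like(w)
        for i in range(len(w)):
            e = np.zeros_like(w)
            e[i] = 1e-5
            g[i] = (similarity_nll(w + e, X1, X2, similar)
                    - similarity_nll(w - e, X1, X2, similar)) / 2e-5
        w -= lr * g
    return w
```

Pairwise similarity determines labels only up to a global label flip, so individual predictions are recovered only up to that equivalence, which is exactly the "consistency up to an equivalence relation" the abstract refers to.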




Review for NeurIPS paper: Learning from Aggregate Observations

Neural Information Processing Systems

Additional Feedback: Please respond to the above comments. The experiments showed a significant improvement in accuracy in the triplet-comparison setting, but I did not understand why. Also, while the significant improvement in triplet comparison is a good result, what are some specific, practical examples where triplet comparisons are given?

After author feedback: I appreciate the authors' feedback. The task addressed in this work is not novel, but I think it makes good contributions, namely: 1) it provides a clear and general formulation for learning from aggregate observations, and 2) it discusses the theoretical aspects of the MLE in Section 4. For publication, I would like the authors to discuss the following issues: 1) As the other reviewer mentioned, Assumption 2 seems restrictive in some situations.




A General Framework for Learning from Weak Supervision

Chen, Hao, Wang, Jindong, Feng, Lei, Li, Xiang, Wang, Yidong, Xie, Xing, Sugiyama, Masashi, Singh, Rita, Raj, Bhiksha

arXiv.org Artificial Intelligence

Weakly supervised learning generally faces challenges in applicability to scenarios with diverse weak supervision, and in scalability due to the complexity of existing algorithms, which hinders practical deployment. This paper introduces a general framework for learning from weak supervision (GLWS) with a novel algorithm. Central to GLWS is an Expectation-Maximization (EM) formulation that accommodates various sources of weak supervision, including instance partial labels, aggregate statistics, pairwise observations, and unlabeled data. We further present an algorithm that significantly simplifies the EM computation using a non-deterministic finite automaton (NFA) together with a forward-backward algorithm, reducing the time complexity from the quadratic or factorial scaling often required by existing solutions to linear. Learning from arbitrary weak supervision is thereby reduced to modeling that supervision as an NFA. GLWS not only enhances the scalability of machine learning models but also demonstrates superior performance and versatility across 11 weak supervision scenarios. We hope our work paves the way for further advancements and practical deployment in this field.
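A toy sketch of the automaton-plus-forward-pass idea (not the paper's GLWS implementation): the aggregate observation "exactly k of n instances are positive" can be encoded as an automaton whose state is the running count of positives, and its probability under the model computed by a forward pass instead of enumerating all 2^n labelings. The function names below are illustrative assumptions:

```python
import numpy as np
from itertools import product

def prob_count_equals(p, k):
    """Forward pass over a counting automaton.
    alpha[s] = P(exactly s positives among the instances seen so far),
    given independent per-instance positive-probabilities p. Capping the
    count at k+1 would make this linear in n for fixed k."""
    alpha = np.zeros(len(p) + 1)
    alpha[0] = 1.0
    for pi in p:
        new = alpha * (1 - pi)       # instance labeled 0: count unchanged
        new[1:] += alpha[:-1] * pi   # instance labeled 1: count increments
        alpha = new
    return alpha[k] if k <= len(p) else 0.0

def prob_count_bruteforce(p, k):
    """Exponential-time check: enumerate all labelings directly."""
    total = 0.0
    for ys in product([0, 1], repeat=len(p)):
        if sum(ys) == k:
            total += np.prod([pi if y else 1 - pi for pi, y in zip(p, ys)])
    return total
```

The forward pass visits each instance once, which is the source of the linear-time claim; the brute-force version is included only to verify the dynamic program on small inputs.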


A Universal Unbiased Method for Classification from Aggregate Observations

Wei, Zixi, Feng, Lei, Han, Bo, Liu, Tongliang, Niu, Gang, Zhu, Xiaofeng, Shen, Heng Tao

arXiv.org Artificial Intelligence

In conventional supervised classification, true labels are required for individual instances. However, collecting true labels for individual instances can be prohibitive due to privacy concerns or unaffordable annotation costs. This motivates the study of classification from aggregate observations (CFAO), where supervision is provided to groups of instances instead of individual instances. CFAO is a generalized learning framework that subsumes various learning problems, such as multiple-instance learning and learning from label proportions. The goal of this paper is to present a novel universal method for CFAO that yields an unbiased estimator of the classification risk for arbitrary losses; previous research failed to achieve this goal. Practically, our method works by weighting the importance of each label for each instance in the group, which provides purified supervision for the classifier to learn from. Theoretically, the proposed method not only guarantees risk consistency, thanks to the unbiased risk estimator, but is also compatible with arbitrary losses. Extensive experiments on various CFAO problems demonstrate the superiority of the proposed method.
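A hedged sketch of the per-instance weighting idea, using a generic soft-label cross-entropy rather than the paper's exact unbiased estimator: each instance in a group receives a per-class weight vector derived from the aggregate observation (e.g. the bag's label proportion), and the classifier is trained on these weights in place of hard labels. Names and shapes below are assumptions:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def group_weighted_ce(logits, label_weights):
    """Cross-entropy in which each instance carries per-class importance
    weights (rows of label_weights, shape (n, C)) instead of a hard
    label. With one-hot rows this reduces to ordinary cross-entropy."""
    log_p = np.log(softmax(logits) + 1e-12)
    return -np.mean(np.sum(label_weights * log_p, axis=-1))
```

One design note: because the weights enter the loss linearly, any loss of this form stays differentiable for whatever model produces the logits, which is in the spirit of the "compatible with arbitrary losses" goal, though the paper's own weighting scheme is what makes the estimator unbiased.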


Filtering for Aggregate Hidden Markov Models with Continuous Observations

Zhang, Qinsheng, Singh, Rahul, Chen, Yongxin

arXiv.org Machine Learning

We consider a class of filtering problems for large populations in which each individual is modeled by the same hidden Markov model (HMM). In this paper, we focus on aggregate inference problems in HMMs with a discrete state space and a continuous observation space. The continuous observations are aggregated such that the individuals are indistinguishable from the measurements. We propose an aggregate inference algorithm called the continuous-observation collective forward-backward algorithm. It extends the recently proposed collective forward-backward algorithm for aggregate inference in HMMs with discrete observations to the case of continuous observations. The efficacy of the algorithm is illustrated through several numerical experiments.
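For orientation, here is the single-individual building block that collective algorithms of this kind extend: a forward filter for an HMM with discrete states and Gaussian observations. This is the standard recursion, not the paper's collective algorithm (which additionally handles aggregated, indistinguishable measurements); the function names and the Gaussian emission model are illustrative assumptions:

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) at x; broadcasts over state-wise mu, sigma."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def forward_filter(obs, pi0, A, mu, sigma):
    """Filtered beliefs alpha_t(s) = P(state_t = s | obs_1..t) for one
    individual: discrete states with transition matrix A, initial
    distribution pi0, and Gaussian emissions N(mu[s], sigma[s]^2).
    Normalizing at each step keeps the recursion numerically stable."""
    alpha = pi0 * gaussian_pdf(obs[0], mu, sigma)
    alpha /= alpha.sum()
    beliefs = [alpha]
    for x in obs[1:]:
        alpha = (alpha @ A) * gaussian_pdf(x, mu, sigma)
        alpha /= alpha.sum()
        beliefs.append(alpha)
    return np.array(beliefs)
```

In the aggregate setting the per-individual beliefs are no longer available, since measurements cannot be attributed to individuals; the collective forward-backward algorithm instead propagates population-level quantities, but the per-step predict-then-update structure above is the common skeleton.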